
    Automatic summarising: factors and directions

    This position paper suggests that progress with automatic summarising demands a better research methodology and a carefully focussed research strategy. In order to develop effective procedures it is necessary to identify and respond to the context factors, i.e. input, purpose, and output factors, that bear on summarising and its evaluation. The paper analyses and illustrates these factors and their implications for evaluation. It then argues that this analysis, together with the state of the art and the intrinsic difficulty of summarising, implies a nearer-term strategy concentrating on shallow, but not surface, text analysis and on indicative summarising. This is illustrated with current work, from which a potentially productive research programme can be developed.
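    The factor analysis sketched in this abstract treats every summarising task as a point in a space of input, purpose, and output factors. A minimal Python sketch of that reading, where the specific field names and example values are illustrative assumptions rather than the paper's own inventory:

```python
from dataclasses import dataclass

# Hypothetical record of the paper's three context-factor classes.
# The individual fields and example values are illustrative assumptions.
@dataclass
class SummarisingContext:
    # Input factors: properties of the source text
    input_form: str        # e.g. "news article"
    input_subject: str     # e.g. "general" vs "specialist"
    # Purpose factors: what the summary is for
    purpose_audience: str  # e.g. "untrained reader"
    purpose_use: str       # e.g. "indicative" (decide whether to read the source)
    # Output factors: required properties of the summary itself
    output_style: str      # e.g. "running text"
    output_length: int     # e.g. target word count

# An indicative-summarising configuration of the kind the paper's
# nearer-term strategy concentrates on.
context = SummarisingContext(
    input_form="news article",
    input_subject="general",
    purpose_audience="untrained reader",
    purpose_use="indicative",
    output_style="running text",
    output_length=100,
)
```

    On this reading, an evaluation asks whether a system's summaries are adequate for the context so specified, rather than good in the abstract.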

    The role of artificial intelligence in information retrieval


    Towards better NLP system evaluation

    This paper considers key elements of evaluation methodology, indicating the many points involved and advocating an unpacking approach to specifying an evaluation remit and design. Recognising the importance of both environment variables and system parameters leads to a grid organisation for tests. The paper illustrates the application of these notions through two examples.
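    The grid organisation mentioned here amounts to crossing environment variables (properties of the setting the system cannot control) with system parameters (settings the developer can vary), so that each combination becomes one test condition. A minimal sketch of that cross-product, with variable names and values that are assumptions for illustration, not taken from the paper:

```python
from itertools import product

# Hypothetical environment variables -- illustrative values only.
environment = {
    "text_type": ["news", "technical"],
    "user_expertise": ["novice", "expert"],
}

# Hypothetical system parameters -- illustrative values only.
system = {
    "analysis_depth": ["surface", "shallow"],
    "output_length": [50, 100],
}

factors = {**environment, **system}
names = list(factors)

# One grid cell per combination of settings; each cell is a test
# condition to be run and scored separately.
grid = [dict(zip(names, values)) for values in product(*factors.values())]

for cell in grid:
    print(cell)  # stand-in for: run the system under these settings, record the score
```

    Keeping environment and system factors distinct in the grid is what lets a score difference be attributed to a parameter change rather than to a change of setting.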

    UK national programmes in natural language research

    Summary of the paper presented by the author, prepared by the SEPLN Editorial Committee.

    Shifting meaning representations
